In [1]:
# simple lti routines in this module
include("lti.jl")
using LTI
# plotting module
using PyPlot



Greedy error measure of two eigenspectra

For each eigenvalue in s1, find and remove its closest neighbor in s2, until s1 is depleted


In [2]:
function specErr(s1, s2)
    n1 = length(s1); n2 = length(s2);
    @assert n1 <= n2
    mask = trues(n2);
    err = 0.0;
    for k in 1:n1
        # indices of the entries of s2 still available for matching
        idxs = find(mask);
        d, j = findmin(abs(s1[k] - s2[idxs]).^2)
        err += d
        # map the winner back to its index in s2 before removing it
        mask[idxs[j]] = false
    end
    return err
end


Out[2]:
specErr (generic function with 1 method)
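For intuition, the same greedy matching can be written as a standalone Python sketch (a hypothetical helper, not part of lti.jl): for each eigenvalue of s1 it finds the nearest remaining eigenvalue of s2, accumulates the squared distance, and removes the match so each neighbor is used at most once.

```python
# Greedy spectral error: for each eigenvalue in s1, find and remove its
# nearest neighbor in s2, accumulating squared distances.
def spec_err(s1, s2):
    assert len(s1) <= len(s2)
    remaining = list(s2)
    err = 0.0
    for z in s1:
        d2, j = min((abs(z - w) ** 2, j) for j, w in enumerate(remaining))
        err += d2
        remaining.pop(j)  # each neighbor matches at most once
    return err
```

Note that greedy matching is not a true minimum-cost assignment; it is a cheap proxy that works well when matched pairs are well separated.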

Dynamical system parameters

  • Dynamics matrix, $A$, is $N$ by $N$ with $K$ slow modes and $N - K$ fast modes. $A$ is generated in one of two ways:

    1. Normal, with no imaginary eigenvalues, by calling: A, _, _ = genDyn(N, K, dFast, dSlowLow, dSlowDiff), in which case $A$ has $K$ slow eigenvalues drawn uniformly at random between dSlowLow and dSlowLow + dSlowDiff, and $N - K$ eigenvalues at dFast.
    2. Normal, with imaginary eigenvalues, by calling: A, _, _ = genDynRot(N, K, dFast, dSlowLow, dSlowDiff, omegaScale), in which case $A$ has $K / 2$ conjugate pairs of slow eigenvalues whose magnitudes are drawn uniformly at random between dSlowLow and dSlowLow + dSlowDiff and whose phases are drawn uniformly at random between -omegaScale and omegaScale, plus $N - K$ eigenvalues at dFast.
  • Observation matrix, $C$, is $M$ by $N$, and generated by sampling $M$ random rows of the identity matrix.

  • $\Pi$ is the steady state covariance matrix satisfying $A\Pi A^T + Q = \Pi$
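The call dlyap(A, Q) below solves exactly this equation. One way to see why a solution exists is the fixed-point iteration $P \leftarrow A P A^T + Q$, which converges whenever all eigenvalues of $A$ lie inside the unit circle. A minimal pure-Python sketch of that iteration (helper names are made up; lti.jl's dlyap may well use a direct solve instead):

```python
# Multiply two matrices represented as nested lists.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

# Solve A P A^T + Q = P by fixed-point iteration; converges for stable A
# (spectral radius < 1), since the update is a contraction in that case.
def dlyap_fp(A, Q, iters=500):
    P = [row[:] for row in Q]
    for _ in range(iters):
        APAt = matmul(matmul(A, P), transpose(A))
        P = [[a + q for a, q in zip(ra, rq)] for ra, rq in zip(APAt, Q)]
    return P
```

For a scalar example, $a = 0.5$, $q = 1$ gives $p = q / (1 - a^2) = 4/3$.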

Dynamical system evolution

The system's state $x_t$ evolves according to, $$x_{t + 1} = Ax_t + \xi_t,\text{ with }\xi_t \sim \mathcal{N}(0, Q) \\ y_{t} = Cx_{t} + \eta_t,\text{ with }\eta_t \sim \mathcal{N}(0, R = rI) \\ x_1 \sim \mathcal{N}(0, \Pi)$$
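As a sanity check on these equations, here is a scalar version of the simulation loop in Python (runLTI in lti.jl presumably implements the full matrix-valued version; the function name and signature here are made up):

```python
import random

# Simulate y_t = c*x_t + eta_t, x_{t+1} = a*x_t + xi_t for scalar a, c,
# with xi ~ N(0, q) and eta ~ N(0, r).
def run_lti_1d(a, c, q, r, T, x0=0.0, seed=0):
    rng = random.Random(seed)
    x, ys = x0, []
    for _ in range(T):
        ys.append(c * x + rng.gauss(0.0, r ** 0.5))  # observation with noise
        x = a * x + rng.gauss(0.0, q ** 0.5)         # state update with noise
    return ys
```

With the noise turned off (q = r = 0) the output is simply $y_t = c\,a^{t-1} x_1$, which makes the geometric decay of the modes easy to verify.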


In [30]:
# parameters
K = 4; N = 996; r = 0.00001; dly = 3
dFast = 0.1; dSlowLow = 0.9; dSlowDiff = 0; omegaScale = pi / 3;
# process noise covariance (observation noise R is set per trial)
Q = eye(N + K) * 1.0;
# dynamics as well as its eigen-decomposition
A, U, D = genDynRot(N + K, K, dFast, dSlowLow, dSlowDiff, omegaScale);
# A = [0.912 0.129 0.0308 -0.0028; -0.1064 0.9521 0.1709 -0.1174; -0.0692 -0.1469 0.9258 -0.0659; 0.0384 0.1269 0.0210 0.9125];
# D, U = eig(A)
# stationary covariance and its sqrt for the states
P = dlyap(A, Q); p = real(sqrtm(P));

# one Kalman-Ho trial
function trial(M, T; guessK=K)
    C = genObsSamp(N, M); R = eye(M) * r;
    _, Y = runLTI(A, C, Q, R, int(T), P=P);
    Ap, _ = hoKalman(Y, 3, guessK);
    Dp = eig(Ap)[1];
    Ep = specErr(Dp, D[1:guessK]);
    return Dp, Ep;
end


Out[30]:
trial (generic function with 1 method)

Recovering the slow modes

When there is no clear spectral gap in the sample covariance, the recovered eigenvalues are poor.

Red circles are the true eigenvalues; black crosses are the recovered eigenvalues.
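hoKalman(Y, dly, K) is assumed here to follow the classical Ho-Kalman recipe: stack lagged output covariances $\Lambda_k = C A^k \Pi C^T$ into a block-Hankel matrix, take its rank-$K$ SVD, and read $A$ off a shift of the factors. The Hankel construction step can be sketched in Python (scalar "blocks" and the exact lag convention are illustrative assumptions):

```python
# Arrange lag covariances Lambda_1 .. Lambda_{2d-1} into a d-by-d
# block-Hankel matrix with H[i][j] = Lambda[i + j + 1]; in the real
# hoKalman each entry would be an M-by-M covariance block, and the
# SVD of H would factor it into observability and controllability parts.
def block_hankel(lams, d):
    assert len(lams) >= 2 * d  # need lags up to i + j + 1 = 2d - 1
    return [[lams[i + j + 1] for j in range(d)] for i in range(d)]
```

Anti-diagonals of H share the same lag, which is what makes the low-rank factorization into $\mathcal{O}_d$ and $\mathcal{C}_d$ possible.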


In [31]:
M = 50; T = 200;

C = genObsSamp(N + K, M); R = eye(M) * r;
_, Y = runLTI(A, C, Q, R, T, P = P);
Ap, H = hoKalman(Y, dly, K);
s = eigs(Y * Y' / T; nev = K * 6)[1]
Dp = eig(Ap)[1];

figure(figsize=(6, 2))
subplot(121)
plot(1:length(s), s, "r.-", markersize=5)
xlabel("Index"); ylabel("Eigenvalues")
xticks([]); yticks([]); title("Covariance eigenvalues")
subplot(122)
plot(real(D), imag(D), "ro", markersize=8, alpha=0.5)
plot(real(Dp), imag(Dp), "kx", linewidth=1, markeredgewidth=1)
xlim([0, 1]); ylim([-0.8, 0.8])
xlabel("Real"); ylabel("Imag")
xticks([0, 1]); yticks([]); title("A's eigenvalues")


Out[31]:
PyObject <matplotlib.text.Text object at 0x7f2e4f1dd550>

Spread of recovered eigenvalues


In [5]:
function visRecovery(;M=1, T=10, nTrial=10, guessK=K)
    Dp = Complex128[]  # eig returns double-precision complex eigenvalues
    Es = Float64[]
    for kTrial in 1:nTrial
        tmp = trial(M, T, guessK=guessK)
        Dp = [Dp, tmp[1]];
        Es = [Es, tmp[2]];
    end
    return Dp, Es
end

using Gadfly  # Gadfly's layer/Geom/Scale interface is used for this figure
Dp, Es = visRecovery(M=20, T=10000, nTrial=100, guessK=K);
Gadfly.plot(
    layer(x=real(D), y=imag(D), Geom.point, Theme(default_color=color("red"), default_point_size=1mm)),
    layer(x=real(Dp), y=imag(Dp), Geom.point, Theme(default_point_size=0.5mm)),
    Scale.x_continuous(minvalue=0, maxvalue=1), Scale.y_continuous(minvalue=-1, maxvalue=1))



Spectrum error distribution using the greedy error measure


In [468]:
print(mean(Es))
Gadfly.plot(x=Es, Geom.density)


0.05197711917593677
Out[468]:
[Gadfly density plot of the spectrum-error distribution Es]

Spectrum error in the M-by-T plane with a guessed K


In [473]:
Ms = (K - 2):4:100; dly = 10;
# Ts = 10:4:100;
Ts = logspace(log10(10), log10(2000), 25);
nTrial = 10;
Es = zeros(length(Ms), length(Ts), nTrial)

for (kM, M) in enumerate(Ms)
    for (kT, T) in enumerate(Ts)
        for kTrial in 1:nTrial
            tmp = trial(M, T, guessK=K)
            Es[kM, kT, kTrial] = tmp[2]
        end
    end
end

In [474]:
using PyPlot
imshow(squeeze(mean(Es, 3), 3) / sum(abs(D[1:K]).^2), interpolation="nearest", aspect="auto", origin="lower",
    extent=[minimum(log10(Ts)), maximum(log10(Ts)), minimum(Ms), maximum(Ms)], vmin=0, vmax=2)
xlabel("log10(T)"); ylabel("M"); colorbar()
savefig("hankel.delay.$(dly).eps")